Constrained optimization is an important quantitative method in (non)linear programming and numerical optimization. It has been widely applied in molecular biology, meteorology, physical oceanography, industrial management, and many engineering fields (Bonnans et al., 2006: 5-10). The fundamental task in a constrained optimization problem (COP) is to optimize a given objective function subject to a set of parameter constraints, which may include both equality and inequality constraints. When the objective function and all constraints are linear, we refer to the problem as ``linear programming''; when the objective function or any constraint is nonlinear (e.g., quadratic or higher order), we call it a ``nonlinear programming'' problem.
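In its generic form, a COP can be written as
\begin{align*}
\min_{\theta} \; & f(\theta) \\
\text{subject to} \; & g_i(\theta) \le 0, \quad i = 1, \ldots, p, \\
& h_j(\theta) = 0, \quad j = 1, \ldots, q,
\end{align*}
where $f$ is the objective function and $g_i$ and $h_j$ denote the inequality and equality constraints, respectively.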
Identification of the objective function is the first step in specifying a COP. We already completed this task in the previous section by specifying three objective functions under the least squares and maximum likelihood paradigms. All three objective functions are nonlinear. Next, we need to specify proper constraints to obtain admissible parameter estimates that do not suffer from boundary violations. The basic set of constraints for the three objective functions is defined in terms of the beta coefficients $\beta_m$. These constraints can be written in general form as
\begin{align*}
g_1 &= \beta_0 + \hat{y}_{\sim 0}^{\max} - b \le 0 \\
g_2 &= a - \beta_0 - \hat{y}_{\sim 0}^{\min} \le 0 \\
g_3 &= \beta_0 - b \le 0 \\
g_4 &= -\beta_0 + a \le 0 \\
&\;\;\vdots \\
g_{2m+3} &= \beta_m - \frac{b - \hat{y}_{\sim m}^{\max}}{x_m^{*\max}} \le 0 \\
g_{2m+4} &= -\beta_m + \frac{a - \hat{y}_{\sim m}^{\min}}{x_m^{*\max}} \le 0.
\end{align*}
For maximum likelihood estimation, we add two further constraints on the scale parameter $\sigma$:
\begin{align*}
g_{2m+5} &= \sigma - b + a \le 0 \\
g_{2m+6} &= -\sigma + \kappa \le 0.
\end{align*}
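As a minimal numerical sketch, the inequality constraints above can be passed to an off-the-shelf solver such as SciPy's SLSQP method. The snippet below shows only the generic slope-coefficient pattern ($g_{2m+3}$, $g_{2m+4}$) together with the scale constraints ($g_{2m+5}$, $g_{2m+6}$); the intercept constraints $g_1$--$g_4$ follow the same template. The bounds \texttt{a}, \texttt{b}, \texttt{kappa} and the arrays \texttt{y\_hat\_max}, \texttt{y\_hat\_min}, \texttt{x\_star\_max} are placeholder values standing in for $a$, $b$, $\kappa$, $\hat{y}_{\sim m}^{\max}$, $\hat{y}_{\sim m}^{\min}$, and $x_m^{*\max}$, and the dummy objective stands in for the actual nonlinear objectives of the previous section. Note that SciPy expects constraints in the form $c(\theta) \ge 0$, so each $g_i \le 0$ is passed as $-g_i \ge 0$.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Placeholder problem data (hypothetical values for illustration only).
a, b, kappa = 1.0, 7.0, 1e-3
y_hat_max = np.array([5.5, 6.0, 5.8])   # stands in for \hat{y}_{~m}^{max}
y_hat_min = np.array([2.0, 1.5, 1.8])   # stands in for \hat{y}_{~m}^{min}
x_star_max = np.array([1.0, 3.0, 2.5])  # stands in for x_m^{*max}

def objective(theta):
    """Dummy stand-in for the paper's nonlinear least-squares /
    maximum-likelihood objectives from the previous section."""
    beta, sigma = theta[:-1], theta[-1]
    return float(np.sum(beta**2) + sigma)

# SLSQP takes 'ineq' constraints as c(theta) >= 0, so each g_i <= 0
# from the text is supplied as -g_i >= 0.
constraints = []
for m in range(len(x_star_max)):
    # -g_{2m+3}: (b - y_hat_max[m]) / x_star_max[m] - beta_m >= 0
    constraints.append({'type': 'ineq',
        'fun': lambda t, m=m: (b - y_hat_max[m]) / x_star_max[m] - t[m]})
    # -g_{2m+4}: beta_m - (a - y_hat_min[m]) / x_star_max[m] >= 0
    constraints.append({'type': 'ineq',
        'fun': lambda t, m=m: t[m] - (a - y_hat_min[m]) / x_star_max[m]})
# Scale constraints for maximum likelihood: kappa <= sigma <= b - a.
constraints.append({'type': 'ineq', 'fun': lambda t: (b - a) - t[-1]})
constraints.append({'type': 'ineq', 'fun': lambda t: t[-1] - kappa})

theta0 = np.concatenate([np.zeros(len(x_star_max)), [1.0]])
result = minimize(objective, theta0, method='SLSQP',
                  constraints=constraints)
print(result.x)
\end{verbatim}
Any constrained nonlinear solver that accepts general inequality constraints could be substituted here; SLSQP is used only because it handles the $g_i \le 0$ form directly after the sign flip.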